
    Addressing trust and mutability issues in XAI utilising case based reasoning.

    Explainable AI (XAI) research is required to ensure that explanations are human readable and understandable. Present XAI approaches are useful for observing and comprehending some of the most important underlying properties of any black-box AI model. However, when it comes to pushing them into production, certain critical concerns may arise: (1) How can end-users rely on the output of an XAI platform and trust the system? (2) How can end-users customise the platform's output depending on their own preferences? In this project, we will explore how to address these concerns by utilising Case-based Reasoning. Accordingly, we propose to exploit the neighbourhood to improve end-user trust by offering similar cases and confidence scores, and to use different retrieval strategies to address end-user preferences. Additionally, this project will also look at how to leverage Conversational AI and Natural Language Generation approaches to improve the interactive and engaging user experience with example-based XAI systems.
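
    A minimal sketch of the neighbourhood idea described above, not the project's implementation: retrieve a query's most similar past cases and derive a confidence score from how consistently their known outcomes agree with the model's prediction. The names (case_features, case_labels, k) and the agreement-based score are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def similar_cases_with_confidence(query, prediction, case_features, case_labels, k=5):
    """Return the k most similar past cases and an agreement-based confidence score."""
    nn = NearestNeighbors(n_neighbors=k).fit(case_features)
    distances, indices = nn.kneighbors(query.reshape(1, -1))
    neighbour_labels = case_labels[indices[0]]
    # Confidence: fraction of retrieved neighbours whose known outcome
    # matches the black-box model's prediction for the query.
    confidence = float(np.mean(neighbour_labels == prediction))
    return indices[0], distances[0], confidence
```

    A different retrieval strategy, for example a weighted distance over a user-chosen subset of features, could be swapped in to reflect end-user preferences.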

    Explainable weather forecasts through an LSTM-CBR twin system.

    In this paper, we explore two methods for explaining LSTM-based temperature forecasts using the previous 14-day progressions of humidity and pressure. First, we propose and evaluate an LSTM-CBR twin system that generates nearest neighbours that can be visualised as explanations. Second, we use feature attributions from Integrated Gradients to generate textual explanations that summarise the key progressions in the past 14 days that led to the predicted value.
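
    A rough sketch, under assumed data shapes, of the twin-system retrieval step: the case base holds past 14-day (humidity, pressure) windows together with the temperature that followed them, and the query window's nearest cases are retrieved so their progressions can be shown alongside the LSTM's forecast. Names such as case_windows and case_temperatures are illustrative, not from the paper.

```python
import numpy as np

def retrieve_similar_progressions(query_window, case_windows, case_temperatures, k=3):
    """query_window: (14, 2) daily humidity/pressure; case_windows: (n, 14, 2)."""
    # Euclidean distance over the flattened 14-day progressions.
    diffs = case_windows.reshape(len(case_windows), -1) - query_window.reshape(1, -1)
    distances = np.linalg.norm(diffs, axis=1)
    nearest = np.argsort(distances)[:k]
    # Each retrieved case pairs a similar past progression with its observed temperature,
    # which can be visualised next to the forecast for the query as an explanation.
    return [(case_windows[i], case_temperatures[i], distances[i]) for i in nearest]
```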

    Towards feasible counterfactual explanations: a taxonomy guided template-based NLG method.

    Counterfactual Explanations (cf-XAI) describe the smallest changes in feature values necessary to change an outcome from one class to another. However, many cf-XAI methods presently neglect the feasibility of those changes. In this paper, we introduce a novel approach for presenting cf-XAI in natural language (Natural-XAI), giving careful consideration to actionable and comprehensible aspects while remaining cognizant of immutability and ethical concerns. We present three contributions to this endeavor. Firstly, through a user study, we identify two types of themes present in cf-XAI composed by humans: content-related, focusing on how features and their values are included from both the counterfactual and the query perspectives; and structure-related, focusing on the structure and terminology used for describing necessary value changes. Secondly, we introduce a feature actionability taxonomy with four clearly defined categories, each accompanied by an example, to streamline the explanation presentation process. Using insights from the user study and our taxonomy, we created a generalisable template-based natural language generation (NLG) method compatible with existing explainers like DiCE, NICE, and DisCERN, to produce counterfactuals that address the aforementioned limitations of existing approaches. Finally, we conducted a second user study to assess the performance of our taxonomy-guided NLG templates on three domains. Our findings show that the taxonomy-guided Natural-XAI approach (n-XAIT) received higher user ratings across all dimensions, with significantly improved results in the majority of the domains assessed for the articulation, acceptability, feasibility, and sensitivity dimensions.
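
    An illustrative sketch, not the paper's templates, of how a template-based NLG step can verbalise a counterfactual produced by an explainer such as DiCE: each changed feature is rendered with a template chosen according to an actionability category. The category labels, feature mapping, and template wordings below are assumptions for illustration only.

```python
# Hypothetical actionability mapping and templates; the paper's taxonomy has
# four categories, whose names and wordings are not reproduced here.
ACTIONABILITY = {"income": "actionable", "age": "immutable"}

TEMPLATES = {
    "actionable": "changing {feature} from {old} to {new} would change the outcome",
    "immutable": "{feature} would need to be {new} instead of {old}, which cannot be acted upon",
}

def verbalise_counterfactual(query, counterfactual):
    """Render the feature changes between a query and its counterfactual as a sentence."""
    sentences = []
    for feature, new_value in counterfactual.items():
        old_value = query.get(feature)
        if old_value == new_value:
            continue  # only changed features are mentioned
        category = ACTIONABILITY.get(feature, "actionable")
        sentences.append(TEMPLATES[category].format(feature=feature, old=old_value, new=new_value))
    return "To obtain the desired outcome, " + "; ".join(sentences) + "."
```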